Search Results
Search for: All records
Total resources: 6

Author / Contributor:
- Kang, Mintong (6)
- Li, Bo (6)
- Li, Linyi (4)
- Chaudhury, Bhaskar Ray (2)
- Gürel, Nezihe Merve (2)
- Mehta, Ruta (2)
- Lin, Zhen (1)
- Liu, Yang (1)
- Song, Dawn (1)
- Sun, Jimeng (1)
- Weber, Maurice (1)
- Xiao, Cao (1)
- Yu, Ning (1)
- Zhang, Ce (1)
- Kang, Mintong; Gürel, Nezihe Merve; Yu, Ning; Song, Dawn; Li, Bo (International Conference on Machine Learning (ICML 2024))
- Kang, Mintong; Gürel, Nezihe Merve; Li, Linyi; Li, Bo (Proceedings of the Twelfth International Conference on Learning Representations (ICLR 2024))
- Chaudhury, Bhaskar Ray; Li, Linyi; Kang, Mintong; Li, Bo; Mehta, Ruta (Advances in Neural Information Processing Systems)
- Chaudhury, Bhaskar Ray; Li, Linyi; Kang, Mintong; Li, Bo; Mehta, Ruta (Advances in Neural Information Processing Systems)
- Kang, Mintong; Li, Linyi; Weber, Maurice; Liu, Yang; Zhang, Ce; Li, Bo (Neural Information Processing Systems (NeurIPS))
  Extensive efforts have been made to understand and improve the fairness of machine learning models based on observational metrics, especially in high-stakes domains such as medical insurance, education, and hiring decisions. However, there is a lack of certified fairness that considers the end-to-end performance of an ML model. In this paper, we first formulate the certified fairness of an ML model trained on a given data distribution as an optimization problem based on the model's performance loss bound on a fairness-constrained distribution, which is within bounded distributional distance of the training distribution. We then propose a general fairness certification framework and instantiate it for both sensitive-shifting and general-shifting scenarios. In particular, we propose to solve the optimization problem by decomposing the original data distribution into analytical subpopulations and proving the convexity of the resulting subproblems. We evaluate our certified fairness on six real-world datasets and show that our certification is tight in the sensitive-shifting scenario and provides non-trivial certification under general shifting. Our framework is flexible enough to integrate additional non-skewness constraints, and we show that these provide even tighter certification under different real-world scenarios. We also compare our certified fairness bound with adapted existing distributional robustness bounds on Gaussian data and demonstrate that our method is significantly tighter.
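The abstract above describes bounding a model's worst-case loss over distributions within a bounded distance of the training distribution, after decomposing that distribution into subpopulations. As a rough illustration only (not the paper's actual formulation), a toy version of such a subproblem can be posed as a small linear program: maximize the expected loss over subpopulation mixture weights constrained to a total-variation ball around the training mixture. The function name, the TV-distance choice, and the two-subpopulation setup here are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_loss(losses, base_weights, tv_budget):
    """Toy certification subproblem (illustrative, not the paper's method):
    maximize q . losses over mixture weights q with sum(q) = 1, q >= 0,
    and 0.5 * ||q - p||_1 <= tv_budget, where p = base_weights.
    """
    k = len(losses)
    p = np.asarray(base_weights, dtype=float)
    # Variables: q (k entries) and slack s (k entries) with s >= |q - p|.
    # linprog minimizes, so negate the objective on q; s has zero cost.
    c = np.concatenate([-np.asarray(losses, dtype=float), np.zeros(k)])
    # Encode |q - p| <= s as:  q - s <= p   and   -q - s <= -p
    A_ub = np.block([[np.eye(k), -np.eye(k)],
                     [-np.eye(k), -np.eye(k)]])
    b_ub = np.concatenate([p, -p])
    # Total-variation budget: 0.5 * sum(s) <= tv_budget
    A_ub = np.vstack([A_ub, np.concatenate([np.zeros(k), 0.5 * np.ones(k)])])
    b_ub = np.append(b_ub, tv_budget)
    # Mixture weights must sum to one.
    A_eq = np.concatenate([np.ones(k), np.zeros(k)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (2 * k))
    return -res.fun

# With per-subpopulation losses [0.1, 0.4], equal base weights, and a
# TV budget of 0.2, the adversary shifts 0.2 mass onto the high-loss
# subpopulation: q = [0.3, 0.7], giving a bound of 0.31.
bound = worst_case_loss([0.1, 0.4], [0.5, 0.5], 0.2)
```

Because the objective is linear in the mixture weights and the feasible set is a polytope, the subproblem is convex and the solver returns its global maximum; the paper's convexity argument for its (much more general) subproblems plays an analogous role.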